Continuous normalizing flows (CNFs) are a class of generative models that transform a prior distribution into a model distribution by solving an ordinary differential equation (ODE). We propose to train CNFs by minimizing probability path divergence (PPD), a novel family of divergences between the probability density path generated by the CNF and a target probability density path. PPD is formulated using a log-mass conservation formula, a linear first-order partial differential equation relating the log target probability to the CNF's defining vector field. PPD has several key benefits over existing methods: it avoids the need to solve an ODE per iteration, readily applies to manifold data, scales to high dimensions, and is compatible with a large family of target paths that interpolate between pure noise and data in finite time. Theoretically, PPD is shown to bound classical probability divergences. Empirically, we show that CNFs trained by minimizing PPD achieve state-of-the-art likelihoods and sample quality on existing low-dimensional manifold benchmarks, and provide the first example of a generative model scaling to moderately high-dimensional manifolds.
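As a concrete illustration of the per-iteration ODE-solving step that PPD training avoids, here is a minimal sketch of sampling from a CNF by Euler-integrating its vector field. The linear field below is a hypothetical stand-in, not a learned field from the paper.

```python
import numpy as np

def integrate_cnf(x0, vector_field, t0=0.0, t1=1.0, steps=100):
    """Map a prior sample x0 to a model sample by Euler-integrating dx/dt = v(x, t)."""
    x = np.asarray(x0, dtype=float)
    dt = (t1 - t0) / steps
    for i in range(steps):
        t = t0 + i * dt
        x = x + dt * vector_field(x, t)
    return x

# Hypothetical linear vector field: contracts samples toward the origin.
field = lambda x, t: -x
x1 = integrate_cnf(np.array([1.0, -2.0]), field)
```

Each training iteration of an ODE-based objective would repeat an integration like this (and typically differentiate through it), which is the cost PPD sidesteps.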
We study the use of amortized optimization to predict optimal transport (OT) maps from the input measures, which we call Meta OT. This helps repeatedly solve similar OT problems between different measures by leveraging the knowledge and information from past problems to rapidly predict and solve new ones. Standard methods, by contrast, ignore the knowledge of past solutions and re-solve each problem from scratch. Meta OT models surpass the standard convergence rates of log-Sinkhorn solvers in the discrete setting and of convex potentials in the continuous setting. We improve the computational time of standard OT solvers by multiple orders of magnitude in discrete and continuous transport settings between images, spherical data, and color palettes. Our source code is available at http://github.com/facebookresearch/meta-ot.
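For context, the discrete baseline that Meta OT amortizes is the Sinkhorn algorithm. Below is a minimal, standard entropic-regularized Sinkhorn iteration between two histograms (not the Meta OT predictor itself); the tiny two-point problem is made up for illustration.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, iters=200):
    """Entropic-regularized OT between histograms a, b with cost matrix C."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)                # scale columns to match marginal b
        u = a / (K @ v)                  # scale rows to match marginal a
    P = u[:, None] * K * v[None, :]      # transport plan
    return P

a = np.array([0.5, 0.5])
b = np.array([0.5, 0.5])
C = np.array([[0.0, 1.0], [1.0, 0.0]])   # cheap to stay, costly to swap
P = sinkhorn(a, b, C)
```

A Meta OT model would predict a warm start (e.g. the dual potentials behind `u`, `v`) so that far fewer iterations are needed per new problem.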
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
Inverse reinforcement learning attempts to reconstruct the reward function in a Markov decision problem, using observations of agent actions. As posed by Russell [1998], the problem is ill-posed: the reward function is not identifiable, even in the presence of perfect information about optimal behavior. We provide a resolution to this non-identifiability for problems with entropy regularization. For a given environment, we fully characterize the reward functions leading to a given policy, and demonstrate that, given demonstrations of actions for the same reward under two distinct discount factors, or under sufficiently different environments, the unobserved reward can be recovered. We also give general necessary and sufficient conditions for reconstruction of time-homogeneous rewards on finite horizons, and for action-independent rewards, generalizing recent results of Kim et al. [2021] and Fu et al. [2018].
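A tiny worked example of the non-identifiability at stake: in an entropy-regularized MDP, adding a potential-based shaping term to the reward leaves the soft-optimal policy unchanged, so the two rewards cannot be distinguished from behavior alone. The two-state MDP and the potential below are made up for illustration.

```python
import numpy as np

# Deterministic 2-state, 2-action MDP: next_state[s, a] gives the successor.
next_state = np.array([[0, 1], [1, 0]])
reward = np.array([[1.0, 0.0], [0.5, 2.0]])
gamma = 0.9

def soft_policy(r):
    """Soft (entropy-regularized) optimal policy via soft value iteration."""
    V = np.zeros(2)
    for _ in range(500):
        Q = r + gamma * V[next_state]
        V = np.log(np.exp(Q).sum(axis=1))   # soft Bellman backup
    Q = r + gamma * V[next_state]
    return np.exp(Q - np.log(np.exp(Q).sum(axis=1, keepdims=True)))

phi = np.array([3.0, -1.0])                 # arbitrary potential function
shaped = reward + gamma * phi[next_state] - phi[:, None]
```

Here `soft_policy(reward)` and `soft_policy(shaped)` coincide, which is exactly why extra information (two discount factors, or different environments) is needed to pin the reward down.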
Multi-marginal optimal transport enables one to compare multiple probability measures, and is increasingly finding application in multi-task learning problems. One practical limitation of multi-marginal transport is its computational scalability in the number of measures, samples, and dimensions. In this work, we propose a multi-marginal optimal transport paradigm based on random one-dimensional projections, whose (generalized) distance we term the sliced multi-marginal Wasserstein distance. To construct this distance, we introduce a characterization of the one-dimensional multi-marginal Kantorovich problem and use it to highlight a number of properties of the sliced multi-marginal Wasserstein distance. In particular, we show that (i) the sliced multi-marginal Wasserstein distance is a (generalized) metric that induces the same topology as the standard Wasserstein distance, (ii) it admits a dimension-free sample complexity, and (iii) it is tightly connected to the problem of barycentric averaging under the sliced Wasserstein metric. We conclude by illustrating the sliced multi-marginal Wasserstein distance on multi-task density estimation and multi-dynamics reinforcement learning problems.
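The slicing idea, shown here in its simplest two-marginal form (the paper generalizes it to multiple marginals), is to project samples onto random directions and average the closed-form one-dimensional distances. A Monte Carlo sketch:

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=100, seed=0):
    """Monte Carlo estimate of the sliced 2-Wasserstein distance between
    empirical measures X, Y of equal sample count, shape (n, d)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    total = 0.0
    for _ in range(n_projections):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)          # random direction on the sphere
        px, py = np.sort(X @ theta), np.sort(Y @ theta)
        total += np.mean((px - py) ** 2)        # 1D W2^2 via sorted samples
    return np.sqrt(total / n_projections)

X = np.zeros((50, 3))                            # all mass at the origin
Y = np.ones((50, 3))                             # all mass at (1, 1, 1)
dist = sliced_wasserstein(X, Y)
```

Because each projection costs only a sort, the estimator scales gracefully in samples and dimensions, which is the scalability property the multi-marginal construction inherits.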
We consider a general multi-armed bandit problem, with correlated (as well as simple contextual and restless) elements, as a relaxed control problem. By introducing an entropy regularization, we obtain a smooth asymptotic approximation to the value function. This yields a novel semi-index approximation of the optimal decision process. The semi-index can be interpreted as explicitly balancing an exploration-exploitation trade-off, as in the optimistic (UCB) principle, where the learning premium explicitly describes the asymmetry of information available in the environment and the nonlinearity of the reward function. The performance of the resulting Asymptotic Randomized Control (ARC) algorithm compares favorably with other approaches to correlated multi-armed bandits.
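For reference, the UCB principle that the semi-index is compared against can be sketched in a few lines: the index is an empirical mean plus an exploration bonus (the "learning premium"). This is the classical index, not the ARC semi-index, run on a made-up two-arm problem with deterministic rewards for reproducibility.

```python
import numpy as np

def ucb_pick(counts, means, t, c=2.0):
    """Pick the arm maximizing empirical mean + exploration bonus."""
    bonus = np.sqrt(c * np.log(t) / np.maximum(counts, 1))
    bonus[counts == 0] = np.inf          # try every arm at least once
    return int(np.argmax(means + bonus))

true_reward = np.array([0.2, 0.8])       # hypothetical arm payoffs
counts = np.zeros(2)
means = np.zeros(2)
for t in range(1, 201):
    a = ucb_pick(counts, means, t)
    r = true_reward[a]                   # deterministic rewards for illustration
    counts[a] += 1
    means[a] += (r - means[a]) / counts[a]
```

Under correlated arms, observing one arm is informative about the others; the ARC semi-index adjusts the premium for that shared information, which the per-arm bonus above cannot capture.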
Classical methods for acoustic scene mapping require the estimation of time difference of arrival (TDOA) between microphones. Unfortunately, TDOA estimation is very sensitive to reverberation and additive noise. We introduce an unsupervised data-driven approach that exploits the natural structure of the data. Our method builds upon local conformal autoencoders (LOCA) - an offline deep learning scheme for learning standardized data coordinates from measurements. Our experimental setup includes a microphone array that measures the transmitted sound source at multiple locations across the acoustic enclosure. We demonstrate that LOCA learns a representation that is isometric to the spatial locations of the microphones. The performance of our method is evaluated using a series of realistic simulations and compared with other dimensionality-reduction schemes. We further assess the influence of reverberation on the results of LOCA and show that it demonstrates considerable robustness.
Software Defect Prediction aims at predicting which software modules are most likely to contain defects. The idea behind this approach is to save time during the development process by helping find bugs early. Defect Prediction models are based on historical data. Specifically, one can use data collected from past software distributions, or versions, of the same target application under analysis. Defect Prediction based on past versions is called Cross Version Defect Prediction (CVDP). Traditionally, static code metrics are used to predict defects. In this work, we use the Class Dependency Network (CDN) as an additional predictor of defects, combined with static code metrics. CDN data contains structural information about the target application being analyzed. Usually, CDN data is analyzed using handcrafted network measures, such as social network metrics. Our approach uses network embedding techniques to leverage CDN information without having to build these metrics manually. In order to use the embeddings across versions, we incorporate different embedding alignment techniques. To evaluate our approach, we performed experiments on 24 software release pairs and compared it against several benchmark methods. In these experiments, we analyzed the performance of two different graph embedding techniques, three anchor selection approaches, and two alignment techniques. We also built a meta-model based on two different embeddings and achieved a statistically significant improvement in AUC of 4.7% (p < 0.002) over the baseline method.
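One common embedding-alignment technique is orthogonal Procrustes (whether it is among the two alignment techniques evaluated above is not stated in this summary). A minimal sketch, with a synthetic rotated embedding standing in for the next software version:

```python
import numpy as np

def procrustes_align(X_old, X_new):
    """Orthogonal Procrustes: the rotation Q minimizing ||X_old @ Q - X_new||_F,
    computed from anchor nodes embedded in both versions."""
    U, _, Vt = np.linalg.svd(X_old.T @ X_new)
    return U @ Vt

rng = np.random.default_rng(0)
X_old = rng.normal(size=(20, 5))          # embeddings of 20 anchor classes
angle = 0.7
R = np.eye(5)                             # synthetic drift: rotate first two dims
R[:2, :2] = [[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]]
X_new = X_old @ R                         # hypothetical new-version embedding
Q = procrustes_align(X_old, X_new)
```

After alignment, embeddings from the old version can be compared directly with the new version's space, which is what makes cross-version prediction with embeddings possible.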
Recent advances in deep learning have enabled us to address the curse of dimensionality (COD) by solving problems in higher dimensions. A subset of these approaches to addressing the COD has enabled the solution of high-dimensional PDEs. This has opened doors to solving a variety of real-world problems, ranging from mathematical finance to stochastic control for industrial applications. Although feasible, these deep learning methods are still constrained by training time and memory. Tackling these shortcomings, Tensor Neural Networks (TNN) demonstrate that they can provide significant parameter savings while attaining the same accuracy as the classical Dense Neural Network (DNN). We also show how a TNN can be trained faster than a DNN for the same accuracy. Besides TNN, we introduce the Tensor Network Initializer (TNN Init), a weight initialization scheme that leads to faster convergence with smaller variance for an equivalent parameter count compared to a DNN. We benchmark TNN and TNN Init by applying them to solve the parabolic PDE associated with the Heston model, which is widely used in financial pricing theory.
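A back-of-the-envelope sketch of where tensorized layers save parameters, using a plain low-rank factorization as a stand-in for the TNN layers (the actual architecture and ranks are assumptions, chosen only to make the counting concrete):

```python
def dense_params(n_in, n_out):
    """Weights plus biases of a fully connected layer."""
    return n_in * n_out + n_out

def factored_params(n_in, n_out, rank):
    """Replace W (n_in x n_out) with A (n_in x rank) @ B (rank x n_out)."""
    return n_in * rank + rank * n_out + n_out

n_in, n_out, rank = 1024, 1024, 16
saving = dense_params(n_in, n_out) / factored_params(n_in, n_out, rank)
```

With these hypothetical sizes the factored layer uses roughly 30x fewer parameters, which is the kind of saving that also shrinks memory footprint and training time.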
Managing novelty in perception-based human activity recognition (HAR) is critical in realistic settings to improve task performance over time and ensure solution generalization outside of prior seen samples. Novelty manifests in HAR as unseen samples, activities, objects, environments, and sensor changes, among other ways. Novelty may be task-relevant, such as a new class or new features, or task-irrelevant, resulting in nuisance novelty, such as never-before-seen noise, blur, or distorted video recordings. To perform HAR optimally, algorithmic solutions must be tolerant to nuisance novelty and learn over time in the face of novelty. This paper 1) formalizes the definition of novelty in HAR, building upon the prior definition of novelty in classification tasks, 2) proposes an incremental open world learning (OWL) protocol and applies it to the Kinetics datasets to generate a new benchmark, KOWL-718, 3) analyzes the performance of current state-of-the-art HAR models when novelty is introduced over time, and 4) provides a containerized and packaged pipeline for reproducing the OWL protocol and for adapting it to any future updates to Kinetics. The experimental analysis includes an ablation study of how the different models perform under various conditions as annotated by Kinetics-AVA. The protocol, as an algorithm for reproducing experiments using the KOWL-718 benchmark, will be publicly released with code and containers at https://github.com/prijatelj/human-activity-recognition-in-an-open-world. The code may be used to analyze different annotations and subsets of the Kinetics datasets in an incremental open world fashion, as well as be extended as further updates to Kinetics are released.